
    Four dimensional R^4 superinvariants through gauge completion

    We fully compute the N=1 supersymmetrization of the fourth power of the Weyl tensor in d=4 x-space with the auxiliary fields. In a previous paper, we showed that their elimination requires an infinite number of terms; we explicitly compute those terms to order \kappa^4 (three-loop). We also write, in superspace notation, all the possible N=1 actions in four dimensions that contain pure R^4 terms (with coupling constants). We explicitly write these actions in terms of the \theta components of the chiral density \epsilon and the supergravity superfields R, G_m, W_{ABC}. Using the method of gauge completion, we compute the necessary \theta components, which allow us to write these actions in x-space. We discuss under which circumstances these extra R^4 correction terms can be reabsorbed in the pure supergravity action, and their relevance to the quantum supergravity/string theory effective actions. Comment: 20 pages, no figures. Sec. 3 clarified; typos corrected

    Escape from attracting sets in randomly perturbed systems

    The dynamics of escape from an attractive state due to random perturbations is of central interest to many areas of science. Previous studies of escape in chaotic systems have focused on the case of unbounded noise, usually assumed to have a Gaussian distribution. In this paper, we address the problem of escape induced by bounded noise. We show that the dynamics of escape from an attractor's basin is equivalent to that of a closed system with an appropriately chosen "hole". Using this equivalence, we show that there is a minimum noise amplitude above which escape takes place, and we derive analytical expressions for the scaling of the escape rate with noise amplitude near the escape transition. We verify our analytical predictions through numerical simulations of a two-dimensional map with noise. Comment: up to date with the published version
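The escape-rate measurement described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's model: it uses a simple contracting 1D map (rather than the paper's two-dimensional one) perturbed by bounded uniform noise, and estimates the escape rate as the inverse mean escape time; the map, basin boundary, and noise amplitude are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def escape_time(eps, x0=0.0, max_steps=100_000):
    """Steps until the trajectory leaves the basin [-0.5, 0.5].

    The deterministic map x -> 0.6*x contracts toward the fixed point
    at 0; bounded noise eps*u with u ~ Uniform(-1, 1) perturbs it.
    """
    x = x0
    for t in range(1, max_steps + 1):
        x = 0.6 * x + eps * rng.uniform(-1.0, 1.0)
        if abs(x) > 0.5:
            return t
    return max_steps  # no escape observed within the budget

# Bounded noise confines the trajectory to |x| <= eps / (1 - 0.6),
# so escape is impossible below the critical amplitude eps = 0.2;
# just above it, escape becomes a rare event.
times = [escape_time(eps=0.25) for _ in range(200)]
rate = 1.0 / np.mean(times)
```

Note the qualitative point the abstract makes: for eps below the critical amplitude the loop never triggers an escape, while above it the rate grows with the noise amplitude.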

    FCN-rLSTM: Deep Spatio-Temporal Neural Networks for Vehicle Counting in City Cameras

    In this paper, we develop deep spatio-temporal neural networks to sequentially count vehicles from low-quality videos captured by city cameras (citycams). Citycam videos have low resolution, low frame rate, high occlusion and large perspective, making most existing methods lose their efficacy. To overcome the limitations of existing methods and incorporate the temporal information of traffic video, we design a novel FCN-rLSTM network to jointly estimate vehicle density and vehicle count by connecting fully convolutional neural networks (FCN) with long short-term memory networks (LSTM) in a residual learning fashion. Such a design leverages the strengths of FCN for pixel-level prediction and the strengths of LSTM for learning complex temporal dynamics. The residual learning connection reformulates the vehicle count regression as learning residual functions with reference to the sum of densities in each frame, which significantly accelerates the training of the networks. To preserve feature map resolution, we propose a Hyper-Atrous combination to integrate atrous convolution in the FCN and combine feature maps of different convolution layers. FCN-rLSTM enables refined feature representation and a novel end-to-end trainable mapping from pixels to vehicle count. We extensively evaluated the proposed method on different counting tasks with three datasets, with experimental results demonstrating its effectiveness and robustness. In particular, FCN-rLSTM reduces the mean absolute error (MAE) from 5.31 to 4.21 on TRANCOS, and reduces the MAE from 2.74 to 1.53 on WebCamT. The training process is accelerated by 5 times on average. Comment: Accepted by the International Conference on Computer Vision (ICCV), 2017
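The residual-learning reformulation described above can be illustrated in a few lines. This is a hedged numpy sketch of the arithmetic only, not the paper's network: `density` stands in for the FCN's per-frame density maps and `residual` for the LSTM's per-frame output, both randomly generated here; the final count is the density-map sum plus the learned residual, so the temporal model only has to fit the (small) correction term.

```python
import numpy as np

rng = np.random.default_rng(1)

T, H, W = 5, 4, 6                           # frames, density-map height, width
density = rng.uniform(0, 0.1, (T, H, W))    # stand-in for FCN density maps
residual = rng.normal(0, 0.05, T)           # stand-in for the LSTM residuals

# Base count per frame: integrate (sum) the density map over all pixels
base_count = density.reshape(T, -1).sum(axis=1)

# Residual-learning skip connection: count = sum of densities + residual
count = base_count + residual
```

Because `base_count` already carries most of the signal, the regression target for the temporal branch is centered near zero, which is what the abstract credits for the faster training.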

    Understanding Traffic Density from Large-Scale Web Camera Data

    Understanding traffic density from large-scale web camera (webcam) videos is a challenging problem because such videos have low spatial and temporal resolution, high occlusion and large perspective. To deeply understand traffic density, we explore both deep-learning-based and optimization-based methods. To avoid individual vehicle detection and tracking, both methods map the image into a vehicle density map, one based on rank-constrained regression and the other based on fully convolutional networks (FCN). The regression-based method learns different weights for different blocks in the image to increase the degrees of freedom of the weights and embed perspective information. The FCN-based method jointly estimates the vehicle density map and vehicle count with a residual learning framework to perform end-to-end dense prediction, allowing arbitrary image resolution and adapting to different vehicle scales and perspectives. We analyze and compare both methods, and draw insights from the optimization-based method to improve the deep model. Since existing datasets do not cover all the challenges in our work, we collected and labelled a large-scale traffic video dataset containing 60 million frames from 212 webcams. Both methods are extensively evaluated and compared on different counting tasks and datasets. The FCN-based method significantly reduces the mean absolute error from 10.99 to 5.31 on the public dataset TRANCOS compared with the state-of-the-art baseline. Comment: Accepted by CVPR 2017. Preprint version was uploaded at http://welcome.isr.tecnico.ulisboa.pt/publications/understanding-traffic-density-from-large-scale-web-camera-data
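The block-wise weighting idea in the regression-based method can be sketched as follows. This is an illustrative toy, not the paper's rank-constrained solver: it swaps in plain ordinary least squares, with one synthetic feature per image block, to show why giving each block its own weight lets near and far regions scale the same local evidence differently (the feature, weight, and noise values are all made up).

```python
import numpy as np

rng = np.random.default_rng(2)

n_blocks, n_samples = 4, 50
# One local feature per image block (e.g. foreground mass in that block)
features = rng.uniform(0, 1, (n_samples, n_blocks))
# Ground-truth per-block weights: distant blocks need larger weights
# because each vehicle covers fewer pixels there (perspective effect)
true_w = np.array([0.5, 1.0, 2.0, 4.0])
counts = features @ true_w + rng.normal(0, 0.01, n_samples)

# Recover independent per-block weights by ordinary least squares
w, *_ = np.linalg.lstsq(features, counts, rcond=None)
```

The paper's rank constraint additionally ties the per-block weights together so they vary smoothly with perspective; the unconstrained fit above is only the starting point for that.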